Explicit measurements with almost optimal thresholds for compressed sensing
We consider the deterministic construction of a measurement matrix and a recovery method for signals that are block sparse. A signal of dimension N = nd, consisting of n blocks of size d, is called (s, d)-block sparse if only s blocks out of n are nonzero. We construct an explicit linear mapping Φ that maps the (s, d)-block-sparse signal to a measurement vector of dimension M, where s·d < N(1 - (1 - M/N)^(d/(d+1)) - o(1)). We show that if the (s, d)-block-sparse signal is chosen uniformly at random, then the signal can almost surely be reconstructed from the measurement vector in O(N^3) computations.
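The (s, d)-block-sparse model above can be illustrated with a minimal pure-Python sketch; the function names `block_sparse_signal` and `block_sparsity` are ours for illustration, not the paper's:

```python
import random

def block_sparse_signal(n, d, s, rng=random.Random(0)):
    """Generate an (s, d)-block-sparse signal of dimension N = n*d:
    s of the n length-d blocks are filled with Gaussian entries,
    the remaining n - s blocks are zero."""
    support = rng.sample(range(n), s)
    x = [0.0] * (n * d)
    for b in support:
        for j in range(d):
            x[b * d + j] = rng.gauss(0.0, 1.0)
    return x

def block_sparsity(x, d):
    """Count the nonzero length-d blocks of x."""
    n = len(x) // d
    return sum(any(x[b * d + j] != 0.0 for j in range(d)) for b in range(n))
```

With Gaussian entries a chosen block is nonzero with probability 1, so `block_sparsity(block_sparse_signal(10, 4, 3), 4)` counts exactly the s = 3 chosen blocks.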
Recovering Sparse Signals Using Sparse Measurement Matrices in Compressed DNA Microarrays
Microarrays (DNA, protein, etc.) are massively parallel affinity-based biosensors capable of detecting and quantifying a large number of different genomic particles simultaneously. Among them, DNA microarrays comprising tens of thousands of probe spots are currently being employed to test a multitude of targets in a single experiment. In conventional microarrays, each spot contains a large number of copies of a single probe designed to capture a single target, and, hence, collects only a single data point. This is a wasteful use of the sensing resources in comparative DNA microarray experiments, where a test sample is measured relative to a reference sample. Typically, only a fraction of the total number of genes represented by the two samples is differentially expressed, and, thus, a vast number of probe spots may not provide any useful information. To this end, we propose an alternative design, the so-called compressed microarrays, wherein each spot contains copies of several different probes and the total number of spots is potentially much smaller than the number of targets being tested. Fewer spots directly translate to significantly lower costs due to cheaper array manufacturing, simpler image acquisition and processing, and a smaller amount of genomic material needed for experiments. To recover signals from compressed microarray measurements, we leverage ideas from compressive sampling. For sparse measurement matrices, we propose an algorithm that has significantly lower computational complexity than the widely used linear-programming-based methods, and can also recover less sparse signals.
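The pooling idea behind compressed microarrays, where each spot carries several different probes, amounts to a sparse 0/1 measurement matrix. A minimal sketch, assuming a fixed-column-weight random construction (our illustrative choice, not the paper's actual probe design or recovery algorithm):

```python
import random

def sparse_pooling_matrix(m, n, col_weight, rng=random.Random(1)):
    """Random m x n 0/1 measurement matrix with exactly `col_weight`
    ones per column: each of the n targets is assigned to `col_weight`
    of the m spots, so every spot pools several different probes."""
    A = [[0] * n for _ in range(m)]
    for j in range(n):
        for i in rng.sample(range(m), col_weight):
            A[i][j] = 1
    return A
```

With m much smaller than n, each row (spot) then measures a small random subset of targets, which is the sparse-matrix setting the proposed recovery algorithm exploits.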
4-Cycle Free Spatially Coupled LDPC Codes with an Explicit Construction
Spatially coupled low-density parity-check (SC-LDPC) codes are a class of capacity-approaching LDPC codes with low message recovery latency when sliding window decoding is used. In this paper, we first present a new method for constructing a class of SC-LDPC codes from the incidence matrices of a given non-negative integer matrix, and then investigate the relationship of 4-cycles between the matrix and the corresponding SC-LDPC code. Finally, by defining a new class of finite integer sequences, called {\it good sequences}, for the first time we give an explicit method for the construction of a class of 4-cycle-free SC-LDPC codes that can achieve (in most cases) the minimum coupling width.
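A 4-cycle in the Tanner graph of a 0/1 parity-check matrix corresponds to two rows whose supports share at least two columns, which gives a simple generic check (a sketch of the standard criterion, not the paper's construction):

```python
from itertools import combinations

def has_4_cycle(H):
    """Return True iff the Tanner graph of the 0/1 parity-check
    matrix H contains a 4-cycle, i.e. some pair of rows has ones
    in two (or more) common columns."""
    supports = [{j for j, v in enumerate(row) if v} for row in H]
    return any(len(a & b) >= 2 for a, b in combinations(supports, 2))
```

For example, two rows both containing ones in columns 0 and 1 form a 4-cycle, while rows with at most one common one-position do not.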
On the reconstruction of block-sparse signals with an optimal number of measurements
Let A be an M by N matrix (M < N) which is an instance of a real random Gaussian ensemble. In compressed sensing we are interested in finding the sparsest solution to the system of equations A x = y for a given y. In general, whenever the sparsity of x is smaller than half the dimension of y, then with overwhelming probability over A the sparsest solution is unique and can be found by an exhaustive search over x, with an exponential time complexity for any y. The recent work of Candès, Donoho, and Tao shows that minimization of the L_1 norm of x subject to A x = y yields the sparsest solution provided the sparsity of x, say K, is smaller than a certain threshold for a given number of measurements. Specifically, if the dimension of y approaches the dimension of x, the sparsity of x should satisfy K < 0.239 N. Here, we consider the case where x is d-block sparse, i.e., x consists of n = N / d blocks, each of which is either a zero vector or a nonzero vector. Instead of L_1-norm relaxation, we consider the relaxation min_x \| X_1 \|_2 + \| X_2 \|_2 + ... + \| X_n \|_2, subject to A x = y, where X_i = (x_{(i-1)d+1}, x_{(i-1)d+2}, ..., x_{i d}) for i = 1, 2, ..., n. Our main result is that as n -> \infty, the minimization finds the sparsest solution to A x = y, with overwhelming probability in A, for any x whose block sparsity is k/n < 1/2 - O(\epsilon), provided M/N > 1 - 1/d and d = \Omega(\log(1/\epsilon)/\epsilon). The relaxation can be solved in polynomial time using semi-definite programming.
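The objective of the relaxation, the sum of the Euclidean norms of the length-d blocks X_i, is easy to write down; a minimal sketch (solving the constrained minimization itself would require a convex-programming solver, which is not shown here):

```python
import math

def mixed_l2_norm(x, d):
    """Sum of Euclidean norms of the length-d blocks of x:
    ||X_1||_2 + ||X_2||_2 + ... + ||X_n||_2, where
    X_i = (x[(i-1)*d], ..., x[i*d - 1]) and n = len(x) / d."""
    n = len(x) // d
    return sum(math.sqrt(sum(x[b * d + j] ** 2 for j in range(d)))
               for b in range(n))
```

For x = (3, 4, 0, 0) with d = 2 the blocks are (3, 4) and (0, 0), so the objective is 5 + 0 = 5; unlike the plain L_1 norm, the value depends only on the magnitudes of whole blocks, which is what encourages block-sparse solutions.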